16 research outputs found

    A complete simulator for volunteer computing environments

    Volunteer computing is a type of distributed computing in which ordinary people donate their idle computer time to science projects like SETI@home, Climateprediction.net and many others. BOINC provides a complete middleware system for volunteer computing, and it has become a general platform for distributed applications in areas as diverse as mathematics, medicine, molecular biology, climatology, environmental science, and astrophysics. In this document we present the whole development process of ComBoS, a complete simulator of the BOINC infrastructure. Although other BOINC simulators exist, our intention was to create a complete simulator that, unlike the existing ones, could simulate realistic scenarios by taking into account the whole BOINC infrastructure, including elements that other simulators do not consider: projects, servers, network, redundant computing, scheduling, and volunteer nodes. The output of the simulations allows us to analyze a wide range of statistical results, such as the throughput of each project, the number of jobs executed by the clients, the total credit granted and the average occupation of the BOINC servers. This bachelor thesis describes the design of ComBoS and the results of the validation performed. This validation compares the results obtained with ComBoS against the real ones of three different BOINC projects (Einstein@home, SETI@home and LHC@home). In addition, we analyze the performance of the simulator in terms of memory usage and execution time. This document also shows that our simulator can guide the design of BOINC projects, describing some case studies using ComBoS that could help designers verify the feasibility of BOINC projects.
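
    As an illustration only, the following minimal discrete-event sketch (plain Python, not ComBoS code) shows the kind of client/server simulation loop a BOINC simulator performs: volunteer clients repeatedly fetch, execute, and report jobs, and the run reports throughput. All constants (number of clients, work per job, client speeds) are invented for the example.

    import heapq
    import random

    # Hypothetical, simplified discrete-event model of a BOINC-like project:
    # each client asks for a job, computes it, reports it, and asks again.
    SIM_HOURS = 24.0
    N_CLIENTS = 1000
    JOB_FLOPS = 3.6e12                      # work contained in one job (assumed)
    random.seed(0)

    speeds = [random.uniform(1e9, 4e9) for _ in range(N_CLIENTS)]   # FLOPS per client
    events = [(0.0, cid) for cid in range(N_CLIENTS)]               # (time_h, client_id)
    heapq.heapify(events)

    completed = 0
    while events:
        t, cid = heapq.heappop(events)
        if t >= SIM_HOURS:
            continue
        exec_hours = JOB_FLOPS / speeds[cid] / 3600.0   # time to crunch one job
        finish = t + exec_hours
        if finish <= SIM_HOURS:
            completed += 1
            heapq.heappush(events, (finish, cid))       # immediately request the next job

    print(f"jobs completed in {SIM_HOURS:.0f} h: {completed}")
    print(f"throughput: {completed / SIM_HOURS:.1f} jobs/hour")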

    Neutrino interaction classification with a convolutional neural network in the DUNE far detector

    Document written by a large number of authors; only the first-listed author and the authors affiliated with UC3M are referenced. The Deep Underground Neutrino Experiment is a next-generation neutrino oscillation experiment that aims to measure CP-violation in the neutrino sector as part of a wider physics program. A deep learning approach based on a convolutional neural network has been developed to provide highly efficient and pure selections of electron neutrino and muon neutrino charged-current interactions. The electron neutrino (antineutrino) selection efficiency peaks at 90% (94%) and exceeds 85% (90%) for reconstructed neutrino energies between 2 and 5 GeV. The muon neutrino (antineutrino) event selection is found to have a maximum efficiency of 96% (97%) and exceeds 90% (95%) efficiency for reconstructed neutrino energies above 2 GeV. When considering all electron neutrino and antineutrino interactions as signal, a selection purity of 90% is achieved. These event selections are critical to maximize the sensitivity of the experiment to CP-violating effects. This document was prepared by the DUNE Collaboration using the resources of the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. This work was supported by CNPq, FAPERJ, FAPEG and FAPESP, Brazil; CFI, Institute of Particle Physics and NSERC, Canada; CERN; MŠMT, Czech Republic; ERDF, H2020-EU and MSCA, European Union; CNRS/IN2P3 and CEA, France; INFN, Italy; FCT, Portugal; NRF, South Korea; Comunidad de Madrid, Fundación "La Caixa" and MICINN, Spain; State Secretariat for Education, Research and Innovation and SNSF, Switzerland; TÜBITAK, Turkey; The Royal Society and UKRI/STFC, United Kingdom; DOE and NSF, United States of America.
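
    As a hedged illustration of the kind of model the abstract describes (not the actual DUNE network), the following PyTorch sketch defines a small convolutional classifier that maps a single 128x128 detector view to three event classes (nu_e CC, nu_mu CC, neutral current); the input size, channel counts and class list are assumptions.

    import torch
    import torch.nn as nn

    # Minimal sketch of a CNN event classifier (not the DUNE architecture).
    # Assumes one 128x128 detector view per event and three classes.
    class EventCNN(nn.Module):
        def __init__(self, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):                 # x: (batch, 1, 128, 128)
            return self.classifier(self.features(x).flatten(1))

    model = EventCNN()
    scores = model(torch.randn(4, 1, 128, 128))   # fake batch of 4 events
    probs = scores.softmax(dim=1)                 # per-class selection probabilities
    print(probs.shape)                            # torch.Size([4, 3])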

    ComBoS: a complete simulator of volunteer computing and desktop grids

    Volunteer Computing is a type of distributed computing in which ordinary people donate their idle computer time to science projects like SETI@Home, Climateprediction.net and many others. In a similar way, Desktop Grid Computing is a form of distributed computing in which an organization uses its existing computers to handle its own long-running computational tasks. BOINC is the main middleware that provides a software platform for Volunteer Computing and Desktop Grid Computing, and it has become a general platform for distributed applications in areas as diverse as mathematics, medicine, molecular biology, climatology, environmental science, and astrophysics. In this paper we present a complete simulator of BOINC infrastructures, called ComBoS. Although there are other BOINC simulators, none of them allows us to simulate the complete BOINC infrastructure. Our goal was to create a complete simulator that, unlike the existing ones, could simulate realistic scenarios by taking into account the whole BOINC infrastructure, including elements that other simulators do not consider: projects, servers, network, redundant computing, scheduling, and volunteer nodes. The outputs of the simulations allow us to analyze a wide range of statistical results, such as the throughput of each project, the number of jobs executed by the clients, the total credit granted and the average occupation of the BOINC servers. The paper describes the design of ComBoS and the results of the validation performed. This validation compares the results obtained with ComBoS against the real ones of three different BOINC projects (Einstein@Home, SETI@Home and LHC@Home). In addition, we analyze the performance of the simulator in terms of memory usage and execution time. The paper also shows that our simulator can guide the design of BOINC projects, describing some case studies using ComBoS that could help designers verify the feasibility of BOINC projects. (C) 2017 Elsevier B.V. All rights reserved. This work has been partially supported by the Spanish MINISTERIO DE ECONOMÍA Y COMPETITIVIDAD under the project grant TIN2016-79637-P TOWARDS UNIFICATION OF HPC AND BIG DATA PARADIGMS.
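
    To make the "redundant computing" ingredient of the simulated infrastructure concrete, here is a toy sketch (illustrative only, not ComBoS code) of BOINC-style quorum validation: each workunit is replicated to several clients and a canonical result is accepted once enough replicas agree. The quorum and result values are invented.

    from collections import Counter

    # Toy illustration of redundant computing as a simulator would model it:
    # a workunit is sent to three clients and validated when QUORUM results match.
    QUORUM = 2

    def validate(results):
        """Return the canonical result if QUORUM matching results exist, else None."""
        value, n = Counter(results).most_common(1)[0]
        return value if n >= QUORUM else None

    # Three replicas of the same workunit: one client returned a wrong value.
    replica_results = ["a9f3", "a9f3", "ffff"]
    canonical = validate(replica_results)
    print("canonical result:", canonical)      # a9f3 -> credit granted to matching clients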

    A heterogeneous mobile cloud computing model for hybrid clouds

    Mobile cloud computing is a paradigm that delivers applications to mobile devices by using cloud computing. In this way, mobile cloud computing allows for a rich user experience: since client applications run remotely in the cloud infrastructure, they use fewer resources on the user's mobile device. In this paper, we present a new mobile cloud computing model in which platforms of volunteer devices provide part of the resources of the cloud, inspired by both the volunteer computing and mobile edge computing paradigms. These platforms may be hierarchical, based on the capabilities of the volunteer devices and the requirements of the services provided by the clouds. We also describe the orchestration between the volunteer platform and public, private or hybrid clouds. As we show, this new model can be an inexpensive solution for different application scenarios, with benefits in cost savings, elasticity, scalability, load balancing, and efficiency. Moreover, the evaluation performed shows that the proposed model is a feasible solution for cloud services with a large number of mobile users. (C) 2018 Elsevier B.V. All rights reserved. This work has been partially supported by the Spanish MINISTERIO DE ECONOMÍA Y COMPETITIVIDAD under the project grant TIN2016-79637-P TOWARDS UNIFICATION OF HPC AND BIG DATA PARADIGMS.
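
    The following sketch is a hypothetical illustration (device names, tiers and thresholds are invented) of the orchestration decision such a model implies: serve a request on a capable, lightly loaded volunteer device when one is available, and fall back to the public/private/hybrid cloud otherwise.

    # Place a service request on a hierarchical volunteer platform or in the cloud.
    # All names and numbers are illustrative, not part of the paper's model.
    volunteer_devices = [
        {"id": "phone-1",  "cpu_free": 0.2, "tier": "leaf"},
        {"id": "tablet-7", "cpu_free": 0.7, "tier": "leaf"},
        {"id": "pc-3",     "cpu_free": 0.9, "tier": "aggregator"},
    ]

    def place_request(required_cpu, devices):
        # Prefer more capable (aggregator-tier) volunteer devices with enough free CPU.
        candidates = [d for d in devices if d["cpu_free"] >= required_cpu]
        if not candidates:
            return "cloud"                   # offload to the cloud back end
        candidates.sort(key=lambda d: (d["tier"] != "aggregator", -d["cpu_free"]))
        return candidates[0]["id"]

    print(place_request(0.5, volunteer_devices))    # pc-3
    print(place_request(0.95, volunteer_devices))   # cloud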

    A new volunteer computing model for data-intensive applications

    Volunteer computing is a type of distributed computing in which ordinary people donate computing resources to scientific projects. BOINC is the main middleware system for this type of distributed computing. The aim of volunteer computing is to let organizations attain large computing power thanks to the participation of volunteer clients, instead of a high investment in infrastructure. There are projects, like the ATLAS@Home project, in which the number of running jobs has reached a plateau due to the high load that file transfers place on the data servers. This is why we have designed an alternative, using the same BOINC infrastructure, to improve the performance of BOINC projects that have reached their limit because of the I/O bottleneck in the data servers. In this alternative, a percentage of the volunteer clients run as data servers, called data volunteers, which improves the performance of the system by reducing the load on the data servers. In addition, our solution takes advantage of data locality, leveraging the low network latency of closer machines. This paper describes our alternative in detail and shows the performance of the solution, applied to three different BOINC projects, using a simulator of our own, ComBoS. Spanish MINISTERIO DE ECONOMÍA Y COMPETITIVIDAD, Grant/Award Number: TIN2016-79637-P.
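
    As an illustration of the data-volunteer idea (not the paper's implementation), the sketch below designates a fraction of the clients as data volunteers and has an ordinary client download an input file from the lowest-latency holder, falling back to the central data server; the latency placeholder, fraction and names are assumptions.

    import random

    # Toy sketch: some clients also serve input files ("data volunteers"), and a
    # downloading client picks the closest replica holder instead of the data server.
    random.seed(1)
    DATA_VOLUNTEER_FRACTION = 0.1

    clients = [f"client-{i}" for i in range(100)]
    data_volunteers = set(random.sample(clients, int(DATA_VOLUNTEER_FRACTION * len(clients))))

    def latency_ms(a, b):
        # Arbitrary placeholder; a real simulator would use its network topology model.
        return abs(hash((a, b))) % 200 + 5

    def pick_source(client, input_file_holders):
        holders = [h for h in input_file_holders if h in data_volunteers]
        if not holders:
            return "central-data-server"
        return min(holders, key=lambda h: latency_ms(client, h))

    holders_of_file = random.sample(sorted(data_volunteers), 3)
    print(pick_source("client-42", holders_of_file))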

    Artificial intelligence for improved fitting of trajectories of elementary particles in inhomogeneous dense materials immersed in a magnetic field

    In this article, we use artificial intelligence algorithms to show how to enhance the resolution of elementary particle track fitting in inhomogeneous dense detectors, such as plastic scintillators. We use deep learning to replace more traditional Bayesian filtering methods, drastically improving the reconstruction of the interacting particle kinematics. We show that a specific form of neural network, inherited from the field of natural language processing, is very close to the concept of a Bayesian filter that adopts a hyper-informative prior. Such a paradigm change can influence the design of future particle physics experiments and their data exploitation.
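
    The abstract points to an architecture "inherited from natural language processing", i.e. an attention/transformer-style sequence model. As a hedged sketch (not the article's network), the following PyTorch snippet maps a sequence of detector hits to per-hit kinematic estimates, the role otherwise played by a Bayesian filter; the 3-feature hit encoding, layer sizes and output dimension are assumptions.

    import torch
    import torch.nn as nn

    # Minimal transformer-style sequence model for track fitting (illustrative only).
    class TrackFitter(nn.Module):
        def __init__(self, d_model=64, n_heads=4, n_layers=2, hit_dim=3, out_dim=3):
            super().__init__()
            self.embed = nn.Linear(hit_dim, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.head = nn.Linear(d_model, out_dim)    # e.g. momentum components per hit

        def forward(self, hits):                        # hits: (batch, n_hits, hit_dim)
            return self.head(self.encoder(self.embed(hits)))

    model = TrackFitter()
    fake_tracks = torch.randn(8, 20, 3)                 # 8 tracks, 20 hits each
    print(model(fake_tracks).shape)                     # torch.Size([8, 20, 3])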

    WepSIM: an online interactive educational simulator integrating microdesign, microprogramming, and assembly language programming

    Our educational project has three primary goals. First, we want to provide a robust vision of how hardware and software interplay, by integrating the design of an instruction set (through microprogramming) with the use of that instruction set for assembly programming. Second, we wish to offer a versatile and interactive tool in which this integrated vision can be tested. The tool we have developed to achieve this is called WepSIM, and it provides the view of an elemental processor together with a microprogrammed subset of the MIPS instruction set. In addition, WepSIM is flexible enough to be adapted to other instruction sets or hardware components (e.g., ARM or x86). Third, we want to extend the activities of our university courses, labs, and lectures (fixed hours in a fixed place), so that students may learn by using their mobile devices at any location and at any time during the day. This article presents how WepSIM has improved the teaching of Computer Structure courses by empowering students with a more dynamic and guided learning process. In this paper, we show the results obtained from using the simulator in the Computer Structure course of the Bachelor's Degree in Computer Science and Engineering at University Carlos III of Madrid.
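
    To illustrate the fetch/decode/execute view that a simulator like WepSIM exposes, here is a toy register-transfer sketch in Python with an invented two-instruction ISA; WepSIM's actual microcode format and MIPS subset differ.

    # Toy fetch-decode-execute cycle; encodings and register names are invented.
    regs = {"R1": 0, "R2": 5, "PC": 0}
    memory = [
        ("ADDI", "R1", "R1", 3),    # R1 <- R1 + 3
        ("ADD",  "R2", "R2", "R1"), # R2 <- R2 + R1
        ("HALT",),
    ]

    def step():
        instr = memory[regs["PC"]]            # fetch
        regs["PC"] += 1
        op = instr[0]                         # decode
        if op == "ADDI":                      # execute as micro-operations
            _, rd, rs, imm = instr
            regs[rd] = regs[rs] + imm
        elif op == "ADD":
            _, rd, rs, rt = instr
            regs[rd] = regs[rs] + regs[rt]
        return op != "HALT"

    while step():
        pass
    print(regs)    # {'R1': 3, 'R2': 8, 'PC': 3}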

    Convolution on neural networks for high-frequency trend prediction of cryptocurrency exchange rates using technical indicators

    This study explores the suitability of neural networks with a convolutional component as an alternative to traditional multilayer perceptrons in the domain of trend classification of cryptocurrency exchange rates using technical analysis at high frequencies. The experimental work compares the performance of four different network architectures (convolutional neural network, hybrid CNN-LSTM network, multilayer perceptron and radial basis function neural network) to predict whether six popular cryptocurrencies (Bitcoin, Dash, Ether, Litecoin, Monero and Ripple) will increase their value vs. USD in the next minute. The results, based on 18 technical indicators derived from the exchange rates at a one-minute resolution over one year, suggest that all series were predictable to a certain extent using the technical indicators. Convolutional LSTM neural networks significantly outperformed all the rest, while plain CNNs were also able to provide good results, especially for the Bitcoin, Ether and Litecoin cryptocurrencies. We would also like to acknowledge the financial support of the Spanish Ministry of Science, Innovation and Universities under grant PGC2018-096849-B-I00 (MCFin
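
    As a rough sketch of the hybrid CNN-LSTM classifier family the study compares (the layer sizes and 30-minute window are assumptions, not the paper's configuration), the following PyTorch model convolves over a window of the 18 technical indicators, feeds the result to an LSTM, and outputs the probability of an up-move in the next minute.

    import torch
    import torch.nn as nn

    # Sketch of a hybrid CNN-LSTM binary trend classifier (illustrative configuration).
    class CnnLstmTrend(nn.Module):
        def __init__(self, n_indicators=18, window=30):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_indicators, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.lstm = nn.LSTM(input_size=32, hidden_size=16, batch_first=True)
            self.out = nn.Linear(16, 1)

        def forward(self, x):                 # x: (batch, window, n_indicators)
            h = self.conv(x.transpose(1, 2))  # -> (batch, 32, window)
            h, _ = self.lstm(h.transpose(1, 2))
            return torch.sigmoid(self.out(h[:, -1]))   # probability of an up-move

    model = CnnLstmTrend()
    batch = torch.randn(16, 30, 18)           # 16 samples of 30 one-minute steps
    print(model(batch).shape)                 # torch.Size([16, 1])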

    Graph neural network for 3D classification of ambiguities and optical crosstalk in scintillator-based neutrino detectors

    Deep learning tools are being used extensively in high energy physics and are becoming central in the reconstruction of neutrino interactions in particle detectors. In this work, we report on the performance of a graph neural network in assisting with particle flow event reconstruction. The three-dimensional reconstruction of particle tracks produced in neutrino interactions can be subject to ambiguities due to high-multiplicity signatures in the detector or leakage of signal between neighboring active detector volumes. Graph neural networks potentially have the capability of identifying all these features to boost the reconstruction performance. As an example case study, we tested a graph neural network, inspired by the GraphSAGE algorithm, on a novel 3D-granular plastic-scintillator detector that will be used to upgrade the near detector of the T2K experiment. The developed neural network has been trained and tested on diverse neutrino interaction samples, showing very promising results: the classification of particle track voxels produced in the detector can be done with efficiencies and purities of 94-96% per event, and most of the ambiguities can be identified and rejected, while the network remains robust against systematic effects.
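
    A minimal mean-aggregation layer in the spirit of GraphSAGE is sketched below in plain PyTorch (not the paper's implementation): each voxel embedding is updated from its own features and the average of its neighbours', then classified; the feature size, toy adjacency matrix and three-class output are assumptions.

    import torch
    import torch.nn as nn

    # Minimal GraphSAGE-style mean-aggregation layer for per-voxel classification.
    class SageLayer(nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin = nn.Linear(2 * in_dim, out_dim)

        def forward(self, x, adj):            # x: (n, in_dim), adj: (n, n) 0/1 matrix
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
            neigh = adj @ x / deg             # mean of neighbour features
            return torch.relu(self.lin(torch.cat([x, neigh], dim=1)))

    n_voxels, feat = 6, 4
    x = torch.randn(n_voxels, feat)           # e.g. charge, time, position features
    adj = (torch.rand(n_voxels, n_voxels) > 0.5).float()
    adj = ((adj + adj.t()) > 0).float()       # make the toy graph undirected
    h = SageLayer(feat, 16)(x, adj)
    logits = nn.Linear(16, 3)(h)              # per-voxel class scores (e.g. track / crosstalk / ambiguous)
    print(logits.shape)                       # torch.Size([6, 3])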

    Image-based model parameter optimisation using Model-Assisted Generative Adversarial Networks

    We propose and demonstrate the use of a model-assisted generative adversarial network (GAN) to produce fake images that accurately match true images through the variation of the parameters of the model that describes the features of the images. The generator learns the model parameter values that produce fake images that best match the true images. Two case studies show excellent agreement between the generated best-match parameters and the true parameters. The best-match model parameter values can be used to retune the default simulation to minimize any bias when applying image recognition techniques to fake and true images. In the case of a real-world experiment, the true images are experimental data with unknown true model parameter values, and the fake images are produced by a simulation that takes the model parameters as input. The model-assisted GAN uses a convolutional neural network to emulate the simulation for all parameter values; when trained, this network can be used as a conditional generator for fast fake-image production.
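
    The parameter-fitting step can be sketched as follows, in a simplified and hypothetical form (the emulator here is a small untrained MLP and the 2 parameters / flattened 8x8 images are invented, whereas the actual method trains a convolutional emulator adversarially): a conditional generator maps model parameters to an image, and gradient descent on the parameters minimises the distance to a target image.

    import torch
    import torch.nn as nn

    # A (pretend already-trained) conditional generator that emulates the simulation,
    # mapping 2 model parameters to a flattened 8x8 image. Illustrative only.
    emulator = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 64))

    true_params = torch.tensor([0.7, -0.3])
    with torch.no_grad():
        true_image = emulator(true_params)          # stand-in for experimental data

    params = torch.zeros(2, requires_grad=True)     # initial guess for the model parameters
    opt = torch.optim.Adam([params], lr=0.05)
    for step in range(200):
        opt.zero_grad()
        loss = ((emulator(params) - true_image) ** 2).mean()
        loss.backward()
        opt.step()

    print("fitted parameters:", params.detach().numpy())   # typically close to [0.7, -0.3]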